
    Liveness-Based Garbage Collection for Lazy Languages

    We consider the problem of reducing the memory required to run lazy first-order functional programs. Our approach is to analyze programs for liveness of heap-allocated data. The result of the analysis is used to preserve only live data (a subset of reachable data) during garbage collection. The result is an increase in the garbage reclaimed and a reduction in the peak memory requirement of programs. While this technique has already been shown to yield benefits for eager first-order languages, the lack of a statically determinable execution order and the presence of closures pose new challenges for lazy languages. These require changes both in the liveness analysis itself and in the design of the garbage collector. To show the effectiveness of our method, we implemented a copying collector that uses the results of the liveness analysis to preserve live objects, both evaluated data (i.e., in WHNF) and closures. Our experiments confirm that for programs running with a liveness-based garbage collector, there is a significant decrease in peak memory requirements. In addition, a sizable reduction in the number of collections ensures that, in spite of using a more complex garbage collector, the execution times of programs running with liveness- and reachability-based collectors remain comparable.
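
    As a minimal sketch of the live-versus-reachable distinction the analysis exploits (an invented Haskell example, not the paper's analysis), consider a context that demands only the head of a list:

```haskell
-- Only the head of xs is live after the match, yet the entire spine
-- and all elements remain reachable from xs.
headOrZero :: [Int] -> Int
headOrZero xs = case xs of
  (y : _) -> y          -- only y is ever used again
  []      -> 0

main :: IO ()
main = print (headOrZero [1 .. 1000000])
-- A reachability-based collector must preserve the whole list for as
-- long as xs is rooted; a liveness-based collector may reclaim the tail.
```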

    Maximal Sharing in the Lambda Calculus with letrec

    Increasing sharing in programs is desirable to compactify the code and to avoid duplication of reduction work at run time, thereby speeding up execution. We show how a maximal degree of sharing can be obtained for programs expressed as terms in the lambda calculus with letrec. We introduce a notion of 'maximal compactness' for lambda-letrec-terms among all terms with the same infinite unfolding. Instead of being defined purely syntactically, this notion is based on a graph semantics: lambda-letrec-terms are interpreted as first-order term graphs, so that unfolding equivalence between terms is preserved and reflected through bisimilarity of the term graph interpretations. Compactness of the term graphs can then be compared via functional bisimulation. We describe practical and efficient methods for the following two problems: transforming a lambda-letrec-term into a maximally compact form, and deciding whether two lambda-letrec-terms are unfolding-equivalent. The transformation of a lambda-letrec-term L into maximally compact form L_0 proceeds in three steps: (i) translate L into its term graph G = [[L]]; (ii) compute the maximally shared form of G as its bisimulation collapse G_0; (iii) read back a lambda-letrec-term L_0 from the term graph G_0 with the property [[L_0]] = G_0. This guarantees that L_0 and L have the same unfolding, and that L_0 exhibits maximal sharing. The procedure for deciding whether two given lambda-letrec-terms L_1 and L_2 are unfolding-equivalent computes their term graph interpretations [[L_1]] and [[L_2]], and checks whether these term graphs are bisimilar. For illustration, we also provide a readily usable implementation.
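
    The effect of the bisimulation collapse can be seen on a tiny example (ours, for illustration): the two cyclic definitions below have the same infinite unfolding, and the second is the maximally compact form of the first.

```haskell
-- Both terms unfold to the infinite list 1 : 1 : 1 : ...  Their term
-- graph interpretations are bisimilar; collapsing the two-node cycle
-- of onesVerbose yields the one-node cycle of onesCompact.
onesVerbose :: [Int]
onesVerbose = 1 : 1 : onesVerbose   -- letrec r = 1 : 1 : r in r

onesCompact :: [Int]
onesCompact = 1 : onesCompact       -- letrec r = 1 : r in r

main :: IO ()
main = print (take 6 onesVerbose == take 6 onesCompact)   -- True
```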

    Refactoring smelly spreadsheet models

    Identifying bad design patterns in software is a successful and inspiring research trend. While these patterns do not necessarily correspond to software errors, they raise potentially problematic issues, often referred to as code smells, that can compromise, for example, maintainability or evolution. The identification of code smells in spreadsheets, which can be viewed as software development environments for non-professional programmers, has already been the subject of convergent research by different groups. While these research groups have focused on detecting smells in concrete spreadsheets, or spreadsheet instances, in this paper we propose a comprehensive set of smells for abstract representations of spreadsheets, or spreadsheet models. We also propose a set of refactorings suggesting how spreadsheet models can become simpler to understand, manipulate and evolve. Finally, we present the integration of both smells and refactorings in the MDSheet framework.

    Survey and analysis of vegetation and hydrological change in English dune slack habitats

    This report details work conducted under a Memorandum of Agreement (MoA) between Natural England, the Centre for Ecology and Hydrology, and the British Geological Survey, initiating a programme of linked vegetation and hydrological studies. As stated in the MoA, the overall aim of this collaboration is ‘To improve the conservation status of dune wetlands of European importance and the condition of the dune wetland features of sand dune SSSIs identified in Appendix 1, through a major improvement in understanding of dune ecohydrological functioning.’ In consultation with Natural England, nine sites were selected for a survey of the dune wetland component in the summer of 2012, aiming to partially repeat the late-1980s Sand Dune Survey of Great Britain (SDGB). The selected sites hold 77% of the English dune wetland resource. At each site, all dune wetlands were mapped, with the exception of the three sites on the Sefton Coast, where the majority of the core area, but not the satellite areas, was mapped. For each site, the geological setting, surface topography, surface water features, climatic setting and land cover were assessed, and the most important hydrological processes were identified.

    Worker/wrapper/makes it/faster

    Much research in program optimisation has focused on formal approaches to correctness: proving that the meaning of programs is preserved by the optimisation. Paradoxically, there has been comparatively little work on formal approaches to efficiency: proving that the performance of optimised programs is actually improved. This paper addresses this problem for a general-purpose optimisation technique, the worker/wrapper transformation. In particular, we use the call-by-need variant of improvement theory to establish conditions under which the worker/wrapper transformation is formally guaranteed to preserve or improve the time performance of programs in lazy languages such as Haskell.
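
    A classic instance of the transformation (our example, not necessarily the paper's): naive list reversal is factored into a wrapper and a worker that carries an accumulator, improving quadratic behaviour to linear.

```haskell
-- Naive reversal: quadratic, because (++) retraverses its left argument.
slowRev :: [a] -> [a]
slowRev []       = []
slowRev (x : xs) = slowRev xs ++ [x]

-- Worker/wrapper form: the worker operates on the richer type
-- [a] -> [a] -> [a], threading an accumulator; the wrapper recovers
-- the original type by supplying the empty accumulator.
fastRev :: [a] -> [a]
fastRev = wrap work
  where
    work :: [a] -> [a] -> [a]                  -- worker
    work []       acc = acc
    work (x : xs) acc = work xs (x : acc)
    wrap :: ([a] -> [a] -> [a]) -> [a] -> [a]  -- wrapper
    wrap w xs = w xs []
```

    Improvement theory supplies the formal guarantee that fastRev is at least as fast as slowRev, rather than merely extensionally equal to it.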

    Repetitive Reduction Patterns in Lambda Calculus with letrec (Work in Progress)

    For the lambda calculus with letrec we develop an optimisation based on the contraction of a certain class of 'future' (also: virtual) redexes. In the implementation of functional programming languages it is common practice to perform beta-reductions at compile time whenever possible, in order to produce code that requires fewer reductions at run time. This is, however, in principle limited to redexes and created redexes that are 'visible' (in the sense that they can be contracted without the need for unsharing), and cannot generally be extended to redexes that are concealed by sharing constructs such as letrec. In the case of recursion, concealed redexes become visible only after unwindings during evaluation, and then have to be contracted time and again. We observe that in some cases such redexes exhibit a certain form of repetitive behaviour at run time. We describe an analysis for identifying binders that give rise to such repetitive reduction patterns, and eliminate them by a sort of predictive contraction. These binders are thereby lifted out of recursive positions or eliminated altogether, reducing the number of beta-reductions required for each recursive iteration. Both our analysis and simplification can be integrated into existing compilers for functional programming languages as an additional optimisation phase. With this work we hope to contribute to increasing the efficiency of executing programs written in such languages.
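
    An invented Haskell illustration of the phenomenon: a beta-redex concealed inside a recursive definition, and the effect of contracting it once, ahead of time.

```haskell
-- The redex (\step -> ...) (1 + 1) reappears at every unwinding of
-- the recursion and is contracted over and over at run time.
loop :: Int -> Int
loop 0 = 0
loop n = (\step -> step + loop (n - 1)) (1 + 1)

-- After predictive contraction: the binder is lifted out of the
-- recursive position and the redex is contracted once and for all.
loop' :: Int -> Int
loop' = go
  where
    step = 2
    go 0 = 0
    go n = step + go (n - 1)
```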

    Type-based allocation analysis for co-recursion in lazy functional languages

    This paper presents a novel type-and-effect analysis for predicting upper bounds on memory allocation costs for co-recursive definitions in a simple lazily-evaluated functional language. We show the soundness of this system against an instrumented variant of Launchbury's semantics for lazy evaluation, which serves as a formal cost model. Our soundness proof requires an intermediate semantics employing indirections. The proof of correspondence between these semantics that we provide is thus a crucial part of this work. The analysis has been implemented as an automatic inference system. We demonstrate its effectiveness using several example programs that previously could not be automatically analysed.
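
    As an illustration of what such an allocation bound means (our example, not the paper's notation), consider a guarded co-recursive stream:

```haskell
-- Each demanded element forces one co-recursive step, allocating one
-- cons cell plus a thunk for the tail; consuming k elements therefore
-- allocates O(k) heap cells, a bound a type-and-effect system can
-- attach to nats statically.
nats :: Int -> [Int]
nats n = n : nats (n + 1)

main :: IO ()
main = print (sum (take 10 (nats 0)))   -- 45
```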

    Reasoning with the HERMIT: tool support for equational reasoning on GHC core programs

    A benefit of pure functional programming is that it encourages equational reasoning. However, the Haskell language has lacked direct tool support for such reasoning. Consequently, reasoning about Haskell programs is either performed manually or in another language that does provide tool support (e.g. Agda or Coq). HERMIT is a Haskell-specific toolkit designed to support equational reasoning and user-guided program transformation, and to do so as part of the GHC compilation pipeline. This paper describes HERMIT's recently developed support for equational reasoning, and presents two case studies of HERMIT usage: checking that type-class laws hold for specific instance declarations, and mechanising textbook equational reasoning.
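
    The flavour of the first case study (our rendering of textbook reasoning, not a HERMIT script): the functor identity law for a Maybe-like type, checked by case analysis on constructors.

```haskell
-- The functor identity law, fmap id == id, by equational reasoning:
--
--   fmapMaybe id Nothing  = Nothing        -- definition of fmapMaybe
--   fmapMaybe id (Just x) = Just (id x)    -- definition of fmapMaybe
--                         = Just x         -- definition of id
--
-- HERMIT mechanises exactly this style of proof inside the compiler.
fmapMaybe :: (a -> b) -> Maybe a -> Maybe b
fmapMaybe _ Nothing  = Nothing
fmapMaybe f (Just x) = Just (f x)
```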

    The HERMIT in the machine: a plugin for the interactive transformation of GHC core language programs

    The importance of reasoning about and refactoring programs is a central tenet of functional programming. Yet our compilers and development toolchains provide only rudimentary support for these tasks. This paper introduces a programmatic and compiler-centric interface that facilitates refactoring and equational reasoning. To develop our ideas, we have implemented HERMIT, a toolkit enabling informal but systematic transformation of Haskell programs from inside the Glasgow Haskell Compiler's optimization pipeline. With HERMIT, users can experiment with optimizations and equational reasoning, while the tedious heavy lifting of performing the actual transformations is done for them. HERMIT provides a transformation API that can be used to build higher-level rewrite tools. One use case is prototyping new optimizations as clients of this API before they are committed to the GHC toolchain. We describe a HERMIT application: a read-eval-print shell for performing transformations using HERMIT. We also demonstrate using this shell to prototype an optimization on a specific example, and report our initial experiences and remaining challenges.
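
    To give a flavour of a transformation API on which higher-level rewrite tools can be built, here is a small strategy-combinator sketch; the types and names are ours, a hypothetical stand-in rather than HERMIT's actual API.

```haskell
-- A rewrite either transforms an expression or fails; rewrites are
-- composed into larger strategies. (Hypothetical sketch, not HERMIT.)
newtype Rewrite e = Rewrite { applyR :: e -> Maybe e }

-- Try the first rewrite; if it fails, fall back to the second.
(<+>) :: Rewrite e -> Rewrite e -> Rewrite e
r <+> s = Rewrite $ \e -> maybe (applyR s e) Just (applyR r e)

-- Apply a rewrite until it no longer succeeds (loops if r always succeeds).
repeatR :: Rewrite e -> Rewrite e
repeatR r = Rewrite go
  where
    go e = case applyR r e of
             Nothing -> Just e
             Just e' -> go e'

-- Example client: constant-fold additions of zero in a toy expression type.
data Expr = Lit Int | Add Expr Expr deriving Show

addZero :: Rewrite Expr
addZero = Rewrite f
  where
    f (Add e (Lit 0)) = Just e
    f _               = Nothing
```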